Results 1 - 18 of 18
1.
Front Plant Sci ; 15: 1353110, 2024.
Article in English | MEDLINE | ID: mdl-38708393

ABSTRACT

Background: Autofluorescence-based imaging has the potential to non-destructively characterize the biochemical and physiological properties of plants regulated by genotypes using optical properties of the tissue. A comparative study of stress-tolerant and stress-susceptible genotypes of Brassica rapa with respect to newly introduced stress-based phenotypes using machine learning techniques will contribute significantly to the advancement of autofluorescence-based plant phenotyping research.

Methods: Autofluorescence spectral images were used to design a stress-detection classifier with two classes, stressed and non-stressed, using machine learning algorithms. The benchmark dataset consisted of time-series image sequences from three Brassica rapa genotypes (CC, R500, and VT), extreme in their morphological and physiological traits, captured at the high-throughput plant phenotyping facility at the University of Nebraska-Lincoln, USA. We developed a set of machine-learning-based classification models to detect the percentage of stressed tissue derived from plant images and identified the best classifier. From the analysis of the autofluorescence images, two novel stress-based image phenotypes were computed to determine the temporal variation in stressed tissue under progressive drought across different genotypes: the average percentage stress and the moving average percentage stress.

Results: The study demonstrated that both computed phenotypes consistently discriminated stressed from non-stressed tissue, with the oilseed type (R500) being less prone to drought stress than the other two Brassica rapa genotypes (CC and VT).
Conclusion: Autofluorescence signals from the 365/400 nm excitation/emission combination were able to segregate genotypic variation during a progressive drought treatment under a controlled greenhouse environment, allowing for the exploration of other meaningful phenotypes using autofluorescence image sequences with significance in the context of plant science.
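The two phenotypes named above reduce, in essence, to simple statistics over per-image stress fractions. A minimal sketch, assuming each image in the sequence has already been reduced to the fraction of plant pixels classified as stressed (the function names and window size are illustrative, not from the paper's code):

```python
def average_percentage_stress(stress_fractions):
    """Mean percentage of stressed tissue across an image sequence."""
    return 100.0 * sum(stress_fractions) / len(stress_fractions)

def moving_average_percentage_stress(stress_fractions, window=3):
    """Moving average of the per-image stress percentage."""
    out = []
    for i in range(len(stress_fractions) - window + 1):
        win = stress_fractions[i:i + window]
        out.append(100.0 * sum(win) / window)
    return out

daily = [0.05, 0.10, 0.20, 0.40]   # toy drought progression, one value per day
print(round(average_percentage_stress(daily), 2))                    # 18.75
print([round(v, 2) for v in moving_average_percentage_stress(daily)])  # [11.67, 23.33]
```

The moving average smooths day-to-day classifier noise, which is why it is useful for tracking progressive drought.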

2.
Front Plant Sci ; 14: 1211409, 2023.
Article in English | MEDLINE | ID: mdl-38023863

ABSTRACT

Cosegmentation and coattention are extensions of traditional segmentation methods aimed at detecting a common object (or objects) in a group of images. Current cosegmentation and coattention methods are ineffective for objects, such as plants, that change their morphological state while being captured in different modalities and views. Object State Change using Coattention-Cosegmentation (OSC-CO2) is an end-to-end unsupervised deep-learning framework that enhances traditional segmentation techniques by processing, analyzing, selecting, and combining suitable segmentation results that may contain most of the target object's pixels, and then displaying a final segmented image. The framework leverages coattention-based convolutional neural networks (CNNs) and cosegmentation-based dense conditional random fields (CRFs) to address segmentation accuracy in high-dimensional plant imagery with evolving plant objects. The efficacy of OSC-CO2 is demonstrated using plant growth sequences imaged with infrared, visible, and fluorescence cameras in multiple views on a remote-sensing, high-throughput phenotyping platform, and is evaluated using Jaccard index and precision measures. We also introduce CosegPP+, a structured dataset that provides quantitative information on the efficacy of our framework. Results show that OSC-CO2 outperformed state-of-the-art segmentation and cosegmentation methods by improving segmentation accuracy by 3% to 45%.
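The evaluation metrics named above are straightforward to state precisely. A minimal sketch of the Jaccard index and precision for binary segmentation masks, here represented as sets of foreground pixel coordinates (the representation is an assumption for illustration):

```python
def jaccard_and_precision(pred, truth):
    """Jaccard index and precision for binary masks given as sets of
    (row, col) foreground pixels."""
    inter = len(pred & truth)
    union = len(pred | truth)
    jaccard = inter / union if union else 1.0
    precision = inter / len(pred) if pred else 1.0
    return jaccard, precision

truth = {(0, 0), (0, 1), (1, 0), (1, 1)}          # ground-truth plant pixels
pred  = {(0, 0), (0, 1), (1, 0), (2, 2)}          # predicted plant pixels
j, p = jaccard_and_precision(pred, truth)
print(round(j, 2), round(p, 2))  # 0.6 0.75
```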

3.
Front Plant Sci ; 14: 1084778, 2023.
Article in English | MEDLINE | ID: mdl-36818836

ABSTRACT

The emergence timing of a plant, i.e., the time at which the plant is first visible from the surface of the soil, is an important phenotypic event and an indicator of the successful establishment and growth of a plant. The paper introduces a novel deep-learning-based model called EmergeNet with a customized loss function that adapts to plant growth for detecting the emergence timing of the coleoptile (a rigid plant tissue that encloses the first leaves of a seedling). EmergeNet can also track the coleoptile's growth from a time-lapse sequence of images with cluttered backgrounds and extreme variations in illumination. EmergeNet is a novel ensemble segmentation model that integrates three different but promising networks, namely SEResNet, InceptionV3, and VGG19, in the encoder part of its base model, which is the UNet model. EmergeNet can correctly detect the coleoptile at its first emergence, when it is tiny and therefore barely visible on the soil surface. The performance of EmergeNet is evaluated using a benchmark dataset called the University of Nebraska-Lincoln Maize Emergence Dataset (UNL-MED). It contains top-view time-lapse images of maize coleoptiles starting before their emergence and continuing until they are about one inch tall. EmergeNet detects the emergence timing with 100% accuracy compared with human-annotated ground truth. Furthermore, it significantly outperforms UNet by generating very high-quality segmented masks of the coleoptiles in both natural-light and dark environmental conditions.
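Once a segmentation model such as EmergeNet produces per-frame coleoptile masks, the emergence timing itself reduces to finding the first frame whose mask is non-empty. A hedged sketch of that downstream step (the pixel threshold and names are illustrative assumptions, not from the paper):

```python
def emergence_frame(mask_areas, min_pixels=1):
    """Index of the first frame whose segmented coleoptile mask contains at
    least `min_pixels` foreground pixels; None if the plant never emerges."""
    for i, area in enumerate(mask_areas):
        if area >= min_pixels:
            return i
    return None

areas = [0, 0, 0, 2, 15, 40]   # mask area per frame from a segmentation model
print(emergence_frame(areas))  # 3
```

With frame timestamps attached, the returned index converts directly to an emergence time.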

4.
Front Plant Sci ; 14: 1003150, 2023.
Article in English | MEDLINE | ID: mdl-36844082

ABSTRACT

The paper introduces two novel algorithms for predicting and propagating drought stress in plants using image sequences captured by cameras in two modalities, i.e., visible light and hyperspectral. The first algorithm, VisStressPredict, computes a time series of holistic phenotypes, e.g., height, biomass, and size, by analyzing image sequences captured by a visible light camera at discrete time intervals and then adapts dynamic time warping (DTW), a technique for measuring similarity between temporal sequences, for dynamic phenotypic analysis to predict the onset of drought stress. The second algorithm, HyperStressPropagateNet, leverages a deep neural network for temporal stress propagation using hyperspectral imagery. It uses a convolutional neural network to classify the reflectance spectra at individual pixels as either stressed or unstressed to determine the temporal propagation of stress in the plant. A very high correlation between the soil water content and the percentage of the plant under stress as computed by HyperStressPropagateNet on a given day demonstrates its efficacy. Although VisStressPredict and HyperStressPropagateNet fundamentally differ in their goals, and hence in their input image sequences and underlying approaches, the onset of stress as predicted by the stress factor curves computed by VisStressPredict correlates extremely well with the day of appearance of stress pixels in the plants as computed by HyperStressPropagateNet. The two algorithms are evaluated on a dataset of image sequences of cotton plants captured in a high-throughput plant phenotyping platform. The algorithms may be generalized to any plant species to study the effect of abiotic stresses on sustainable agriculture practices.
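Dynamic time warping, which VisStressPredict adapts for dynamic phenotypic analysis, can be stated compactly. A minimal sketch of the classic DTW distance between two 1-D sequences (the paper's adaptation is more involved; this shows only the core recurrence):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences,
    via the standard dynamic-programming recurrence."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, or match of the previous cells
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A repeated sample aligns at zero cost, unlike a plain Euclidean comparison.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```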

5.
PLoS One ; 16(9): e0257001, 2021.
Article in English | MEDLINE | ID: mdl-34473794

ABSTRACT

Cosegmentation is a newly emerging computer vision technique used to segment an object from the background by processing multiple images at the same time. Traditional plant phenotyping analysis uses thresholding segmentation methods, which result in high segmentation accuracy. Although machine learning and deep learning algorithms have been proposed for plant segmentation, their predictions rely on the specific features being present in the training set. A multi-featured dataset and analytics for cosegmentation are therefore critical to better understand and predict plants' responses to the environment. High-throughput phenotyping produces an abundance of data that can be leveraged to improve segmentation accuracy and plant phenotyping. This paper introduces four datasets consisting of two plant species, buckwheat and sunflower, each split into control and drought conditions. Each dataset has three modalities (fluorescence, infrared, and visible) with 7 to 14 temporal images collected in a high-throughput facility at the University of Nebraska-Lincoln. The four datasets (collected together under the CosegPP data repository in this paper) are evaluated using three cosegmentation algorithms, Markov-random-field-based, clustering-based, and deep-learning-based cosegmentation, along with one segmentation approach commonly used in plant phenotyping. The integration of CosegPP with advanced cosegmentation methods will be the latest benchmark for comparing segmentation accuracy and finding areas of improvement for cosegmentation methodology.
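The commonly used plant-phenotyping segmentation baseline referred to above is global thresholding. A minimal sketch on a grayscale image given as a list of rows (the threshold value is an illustrative assumption):

```python
def threshold_segment(image, thresh):
    """Binary segmentation by global thresholding: pixels brighter than
    `thresh` are foreground (plant), the rest background."""
    return [[1 if px > thresh else 0 for px in row] for row in image]

img = [[10, 200, 30],
       [220, 240, 20]]
print(threshold_segment(img, 128))  # [[0, 1, 0], [1, 1, 0]]
```

In practice the threshold is often chosen automatically (e.g., from the image histogram) rather than fixed.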


Subjects
Crops, Agricultural/genetics; Deep Learning; Fagopyrum/genetics; Helianthus/genetics; Image Processing, Computer-Assisted/methods; Phenotype; Benchmarking; Cluster Analysis; Crop Production; Humans; Markov Chains
6.
Front Hum Neurosci ; 15: 638052, 2021.
Article in English | MEDLINE | ID: mdl-33737872

ABSTRACT

In recent years, multivariate pattern analysis (MVPA) has been hugely beneficial for cognitive neuroscience by making new experiment designs possible and by increasing the inferential power of functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and other neuroimaging methodologies. In a similar time frame, "deep learning" (a term for the use of artificial neural networks with convolutional, recurrent, or similarly sophisticated architectures) has produced a parallel revolution in the field of machine learning and has been employed across a wide variety of applications. Traditional MVPA also uses a form of machine learning, but most commonly with much simpler techniques based on linear calculations; a number of studies have applied deep learning techniques to neuroimaging data, but we believe that those have barely scratched the surface of the potential deep learning holds for the field. In this paper, we provide a brief introduction to deep learning for those new to the technique, explore the logistical pros and cons of using deep learning to analyze neuroimaging data - which we term "deep MVPA," or dMVPA - and introduce a new software toolbox (the "Deep Learning In Neuroimaging: Exploration, Analysis, Tools, and Education" package, DeLINEATE for short) intended to facilitate dMVPA for neuroscientists (and indeed, scientists more broadly) everywhere.

7.
Front Plant Sci ; 11: 521431, 2020.
Article in English | MEDLINE | ID: mdl-33362806

ABSTRACT

High-throughput image-based plant phenotyping facilitates the extraction of morphological and biophysical traits of a large number of plants non-invasively in a relatively short time. It facilitates the computation of advanced phenotypes by considering the plant as a single object (holistic phenotypes) or its components, i.e., leaves and the stem (component phenotypes). The architectural complexity of plants increases over time due to variations in self-occlusions and phyllotaxy, i.e., arrangements of leaves around the stem. One of the central challenges to computing phenotypes from 2-dimensional (2D) single-view images of plants, especially at the advanced vegetative stage in the presence of self-occluding leaves, is that the information captured in 2D images is incomplete, and hence, the computed phenotypes are inaccurate. We introduce a novel algorithm to compute 3-dimensional (3D) plant phenotypes from multiview images using voxel-grid reconstruction of the plant (3DPhenoMV). The paper also presents a novel method to reliably detect and separate the individual leaves and the stem from the 3D voxel-grid of the plant using voxel overlapping consistency checks and point cloud clustering techniques. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln 3D Plant Phenotyping Dataset (UNL-3DPPD). A generic taxonomy of 3D image-based plant phenotypes is also presented to promote 3D plant phenotyping research. A subset of these phenotypes is computed using computer vision algorithms, with a discussion of their significance in the context of plant science.
The central contributions of the paper are (a) an algorithm for 3D voxel-grid reconstruction of maize plants at the advanced vegetative stages using images from multiple 2D views; (b) a generic taxonomy of 3D image-based plant phenotypes and a public benchmark dataset, i.e., UNL-3DPPD, to promote the development of 3D image-based plant phenotyping research; and (c) novel voxel overlapping consistency check and point cloud clustering techniques to detect and isolate individual leaves and stem of the maize plants to compute the component phenotypes. Detailed experimental analyses demonstrate the efficacy of the proposed method, and also show the potential of 3D phenotypes to explain the morphological characteristics of plants regulated by genetic and environmental interactions.
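The voxel-grid reconstruction step can be illustrated with the classic silhouette-carving idea: a voxel is kept only if it projects inside the plant silhouette in every view. A toy sketch with two orthographic views standing in for the platform's calibrated cameras (the projection model and all names are illustrative, not the 3DPhenoMV implementation):

```python
def carve(voxels, views):
    """Keep only voxels whose projection lies inside every view's silhouette.

    `views` is a list of (project, silhouette) pairs, where `project` maps a
    3-D voxel coordinate to a 2-D pixel and `silhouette` is a set of pixels.
    """
    return {v for v in voxels
            if all(project(v) in sil for project, sil in views)}

# A 2x2x2 voxel grid and two orthographic silhouettes.
grid = {(x, y, z) for x in range(2) for y in range(2) for z in range(2)}
side = (lambda v: (v[0], v[2]), {(0, 0), (0, 1), (1, 0), (1, 1)})  # x-z view
top  = (lambda v: (v[0], v[1]), {(0, 0), (0, 1), (1, 0)})          # x-y view
kept = carve(grid, [side, top])
print(len(kept))  # 6: the (1, 1, *) column is carved away by the top view
```

Real systems use calibrated perspective projections and much finer grids, but the keep-if-consistent-with-all-views test is the same.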

8.
Environ Monit Assess ; 192(12): 776, 2020 Nov 21.
Article in English | MEDLINE | ID: mdl-33219864

ABSTRACT

Contamination from pesticides and nitrate in groundwater is a significant threat to water quality in general and in agriculturally intensive regions in particular. Three widely used machine learning models, namely artificial neural networks (ANN), support vector machines (SVM), and extreme gradient boosting (XGB), were evaluated for their efficacy in predicting contamination levels using sparse data with non-linear relationships. The predictive ability of the models was assessed using a dataset consisting of 303 wells across 12 Midwestern states in the USA. Multiple hydrogeologic, water quality, and land use features were chosen as the independent variables, and classes were based on measured concentration ranges of nitrate and pesticide. This study evaluates the classification performance of the models for two-, three-, and four-class scenarios and compares them with the corresponding regression models. The study also examines the issue of class imbalance and tests the efficacy of three mitigation techniques for all the scenarios: oversampling, weighting, and a combination of oversampling and weighting. The models' performance is reported using multiple metrics, both insensitive to class imbalance (accuracy) and sensitive to class imbalance (F1 score and MCC). Finally, the study assesses the importance of features using game-theoretic Shapley values to rank features consistently and offer model interpretability.
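Of the three imbalance-mitigation techniques compared above, random oversampling is the simplest to sketch: minority-class rows are duplicated until every class matches the majority count. An illustrative stdlib-only version (not the study's implementation):

```python
import random

def oversample(X, y, seed=0):
    """Randomly duplicate minority-class rows until every class matches the
    majority-class count."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        extra = [rng.choice(rows) for _ in range(target - len(rows))]
        for xi in rows + extra:
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

X = [[0.1], [0.2], [0.3], [0.9]]
y = ["low", "low", "low", "high"]   # 3:1 imbalance
Xb, yb = oversample(X, y)
print(yb.count("low"), yb.count("high"))  # 3 3
```

Oversampling is applied only to the training split; duplicating rows before the train/test split would leak information into the evaluation.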


Subjects
Environmental Monitoring; Groundwater; Machine Learning; Neural Networks, Computer; Support Vector Machine
9.
Front Neurosci ; 14: 417, 2020.
Article in English | MEDLINE | ID: mdl-32425753

ABSTRACT

Many recent developments in machine learning have come from the field of "deep learning," or the use of advanced neural network architectures and techniques. While these methods have produced state-of-the-art results and dominated research focus in many fields, such as image classification and natural language processing, they have not gained as much ground over standard multivariate pattern analysis (MVPA) techniques in the classification of electroencephalography (EEG) or other human neuroscience datasets. The high dimensionality and large amounts of noise present in EEG data, coupled with the relatively low number of examples (trials) that can be reasonably obtained from a sample of human subjects, lead to difficulty training deep learning models. Even when a model successfully converges in training, significant overfitting can occur despite the presence of regularization techniques. To help alleviate these problems, we present a new method of "paired trial classification" that involves classifying pairs of EEG recordings as coming from the same class or different classes. This allows us to drastically increase the number of training examples, in a manner akin to but distinct from traditional data augmentation approaches, through the combinatorics of pairing trials. Moreover, paired trial classification still allows us to determine the true class of a novel example (trial) via a "dictionary" approach: compare the novel example to a group of known examples from each class, and determine the final class via summing the same/different decision values within each class. Since individual trials are noisy, this approach can be further improved by comparing a novel individual example with a "dictionary" in which each entry is an average of several examples (trials). Even further improvements can be realized in situations where multiple samples from a single unknown class can be averaged, thus permitting averaged signals to be compared with averaged signals.
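The "dictionary" step described above can be sketched independently of the pairwise deep model: score a novel trial against known examples of each class and sum the same-class decision values. Here a toy similarity stands in for the trained pairwise classifier, and all names are illustrative:

```python
def similarity(a, b):
    """Toy 'same-class' score: negative squared distance between trials.
    In the paper this role is played by a trained pairwise classifier."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def dictionary_classify(trial, dictionary):
    """`dictionary` maps class label -> list of known (or averaged) examples;
    the class with the highest summed same-class score wins."""
    scores = {label: sum(similarity(trial, ex) for ex in examples)
              for label, examples in dictionary.items()}
    return max(scores, key=scores.get)

dictionary = {"A": [[0.0, 0.1], [0.1, 0.0]],
              "B": [[1.0, 1.0], [0.9, 1.1]]}
print(dictionary_classify([0.05, 0.05], dictionary))  # A
```

Replacing each entry list with a single averaged example implements the noise-reduction variant the abstract describes.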

10.
Front Plant Sci ; 10: 508, 2019.
Article in English | MEDLINE | ID: mdl-31068958

ABSTRACT

The complex interaction between a genotype and its environment controls the biophysical properties of a plant, manifested in observable traits, i.e., the plant's phenome, which influences resource acquisition, performance, and yield. High-throughput automated image-based plant phenotyping refers to sensing and quantifying plant traits non-destructively by analyzing images captured at regular intervals and with precision. While phenomic research has drawn significant attention in the last decade, extracting meaningful and reliable numerical phenotypes from plant images, especially by considering a plant's individual components, e.g., leaves, stem, fruit, and flower, remains a critical bottleneck to translating advances in phenotyping technology into genetic insights, due to various challenges including lighting variations, plant rotations, and self-occlusions. The paper provides (1) a framework for plant phenotyping in a multimodal, multi-view, time-lapsed, high-throughput imaging system; (2) a taxonomy of phenotypes that may be derived by image analysis for better understanding of morphological structure and functional processes in plants; (3) a brief discussion on publicly available datasets to encourage algorithm development and uniform comparison with state-of-the-art methods; (4) an overview of state-of-the-art image-based high-throughput plant phenotyping methods; and (5) open problems for the advancement of this research field.

11.
Plant Methods ; 14: 35, 2018.
Article in English | MEDLINE | ID: mdl-29760766

ABSTRACT

BACKGROUND: Image-based plant phenotyping facilitates the extraction of traits noninvasively by analyzing a large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, the total number of leaves present at any point in time, and the growth of individual leaves during the vegetative-stage life cycle of maize plants are significant phenotypic expressions that best contribute to assessing plant vigor. However, an automated image-based solution to this novel problem is yet to be explored. RESULTS: A set of new holistic and component phenotypes is introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel method to reliably detect the leaves and the stem of maize plants by analyzing 2-dimensional visible-light image sequences captured from the side using a graph-based approach. The total number of leaves is counted and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotype and environment (i.e., greenhouse) is experimentally demonstrated for the maize plants on UNL-CPPD. Statistical models are applied to analyze the impact of the greenhouse environment and demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1.
CONCLUSION: The central contribution of the paper is a novel computer vision based algorithm for automated detection of individual leaves and the stem to compute new component phenotypes along with a public release of a benchmark dataset, i.e., UNL-CPPD. Detailed experimental analyses are performed to demonstrate the temporal variation of the holistic and component phenotypes in maize regulated by environment and genetic variation with a discussion on their significance in the context of plant science.

12.
J Speech Lang Hear Res ; 59(1): 15-26, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26564030

ABSTRACT

PURPOSE: The authors sought to determine an optimal set of flesh points on the tongue and lips for classifying speech movements. METHOD: The authors used electromagnetic articulographs (Carstens AG500 and NDI Wave) to record tongue and lip movements from 13 healthy talkers who articulated 8 vowels, 11 consonants, a phonetically balanced set of words, and a set of short phrases during the recording. We used a machine-learning classifier (support-vector machine) to classify the speech stimuli on the basis of articulatory movements. We then compared classification accuracies of the flesh-point combinations to determine an optimal set of sensors. RESULTS: When data from the 4 sensors (T1: the vicinity between the tongue tip and tongue blade; T4: the tongue-body back; UL: the upper lip; and LL: the lower lip) were combined, phoneme and word classifications were most accurate and were comparable with the full set (including T2: the tongue-body front; and T3: the tongue-body front). CONCLUSION: We identified a 4-sensor set--that is, T1, T4, UL, LL--that yielded a classification accuracy (91%-95%) equivalent to that using all 6 sensors. These findings provide an empirical basis for selecting sensors and their locations for scientific and emerging clinical applications that incorporate articulatory movements.


Subjects
Lip; Speech Production Measurement/methods; Tongue; Adult; Aged; Biomechanical Phenomena; Humans; Lip/physiology; Middle Aged; Phonetics; Speech/physiology; Support Vector Machine; Tongue/physiology; Young Adult
13.
J Speech Lang Hear Res ; 56(5): 1539-51, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23838988

ABSTRACT

PURPOSE: To quantify the articulatory distinctiveness of 8 major English vowels and 11 English consonants based on tongue and lip movement time series data using a data-driven approach. METHOD: Tongue and lip movements of 8 vowels and 11 consonants from 10 healthy talkers were collected. First, classification accuracies were obtained using 2 complementary approaches: (a) Procrustes analysis and (b) a support vector machine. Procrustes distance was then used to measure the articulatory distinctiveness among vowels and consonants. Finally, the distance (distinctiveness) matrices of different vowel pairs and consonant pairs were used to derive articulatory vowel and consonant spaces using multidimensional scaling. RESULTS: Vowel classification accuracies of 91.67% and 89.05% and consonant classification accuracies of 91.37% and 88.94% were obtained using Procrustes analysis and a support vector machine, respectively. Articulatory vowel and consonant spaces were derived based on the pairwise Procrustes distances. CONCLUSIONS: The articulatory vowel space derived in this study resembled the long-standing descriptive articulatory vowel space defined by tongue height and advancement. The articulatory consonant space was consistent with feature-based classification of English consonants. The derived articulatory vowel and consonant spaces may have clinical implications, including serving as an objective measure of the severity of articulatory impairment.


Subjects
Phonation/physiology; Phonetics; Speech Intelligibility/physiology; Speech/physiology; Voice/physiology; Adult; Biomechanical Phenomena/physiology; Female; Humans; Lip/physiology; Middle Aged; Movement/physiology; Speech Acoustics; Tongue/physiology; Young Adult
14.
Meat Sci ; 95(1): 42-50, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23648431

ABSTRACT

The objective of this study was to develop a non-destructive method for classifying cooked-beef tenderness using hyperspectral imaging of optical scattering on fresh beef muscle tissue. A hyperspectral imaging system (λ=922-1739 nm) was used to collect hyperspectral scattering images of the longissimus dorsi muscle (n=472). A modified Lorentzian function was used to fit optical scattering profiles at each wavelength. After removing highly correlated parameters extracted from the Lorentzian function, principal component analysis was performed. Four principal component scores were used in a linear discriminant model to classify beef tenderness. In a validation data set (n=118 samples), the model was able to successfully classify tough and tender samples with 83.3% and 75.0% accuracies, respectively. Presence of fat flecks did not have a significant effect on beef tenderness classification accuracy. The results demonstrate that hyperspectral imaging of optical scattering is a viable technology for beef tenderness classification.


Subjects
Image Processing, Computer-Assisted/methods; Meat/analysis; Models, Theoretical; Muscle, Skeletal/chemistry; Animals; Calibration; Cattle; Cooking; Principal Component Analysis
15.
Am J Physiol Regul Integr Comp Physiol ; 305(3): R291-9, 2013 Aug 01.
Article in English | MEDLINE | ID: mdl-23720135

ABSTRACT

Peripheral arterial disease (PAD), which affects ~10 million Americans, is characterized by atherosclerosis of the noncoronary arteries. PAD produces a progressive accumulation of ischemic injury to the legs, manifested as a gradual degradation of gastrocnemius histology. In this study, we evaluated the hypothesis that quantitative morphological parameters of gastrocnemius myofibers change in a consistent manner during the progression of PAD, provide an objective grading of muscle degeneration in the ischemic limb, and correlate to a clinical stage of PAD. Biopsies were collected with a Bergström needle from PAD patients with claudication (n = 18) and critical limb ischemia (CLI; n = 19) and control patients (n = 19). Myofiber sarcolemmas and myosin heavy chains were labeled for fluorescence detection and quantitative analysis of morphometric variables, including area, roundness, perimeter, equivalent diameter, major and minor axes, solidity, and fiber density. The muscle specimens were separated into training and validation data sets for development of a discriminant model for categorizing muscle samples on the basis of disease severity. The parameters for this model included standard deviation of roundness, standard deviation of solidity of myofibers, and fiber density. For the validation data set, the discriminant model accurately identified control (80.0% accuracy), claudicating (77.7% accuracy), and CLI (88.8% accuracy) patients, with an overall classification accuracy of 82.1%. Myofiber morphometry provided a discriminant model that establishes a correlation between PAD progression and advancing muscle degeneration. This model effectively separated PAD and control patients and provided a grading of muscle degeneration within clinical stages of PAD.


Subjects
Muscle, Skeletal/pathology; Peripheral Arterial Disease/pathology; Aged; Algorithms; Biopsy; Discriminant Analysis; Disease Progression; Female; Fluorescent Dyes; Humans; Image Processing, Computer-Assisted; Linear Models; Male; Microscopy, Fluorescence; Middle Aged; Models, Biological; Muscle Fibers, Skeletal/pathology; Myosins/metabolism; Sarcolemma/pathology
16.
Anat Res Int ; 2012: 604543, 2012.
Article in English | MEDLINE | ID: mdl-22567312

ABSTRACT

Shape analysis is useful for a wide variety of disciplines and has many applications. There are many approaches to shape analysis, one of which focuses on the analysis of shapes that are represented by the coordinates of predefined landmarks on the object. This paper discusses tridimensional regression, a technique that can be used for mapping images and shapes that are represented by sets of three-dimensional landmark coordinates, for comparing and mapping 3D anatomical structures. The degree of similarity between shapes can be quantified using the tridimensional coefficient of determination (R²). An experiment was conducted to evaluate the effectiveness of this technique in correctly matching the image of a face with another image of the same face. These results were compared to the R² values obtained when only two dimensions are used and show that using three dimensions increases the ability to correctly match and discriminate between faces.

17.
Int J Comput Vis Robot ; 2(2)2011 Apr 01.
Article in English | MEDLINE | ID: mdl-24358055

ABSTRACT

Web usability measures the ease of use of a website. This study attempts to find the effect of three factors - font size, italics, and colour count - on web usability. The study was performed using a set of tasks and developing a survey questionnaire. We performed the study using a set of human subjects, selected from the undergraduate students taking courses in psychology. The data computed from the tasks and survey questionnaire were statistically analysed to find if there was any effect of font size, italics, and colour count on the three web usability dimensions. We found that for the student population considered, there was no significant effect of font size on usability. However, the manipulation of italics and colour count did influence some aspects of usability. The subjects performed better for pages with no italics and high italics compared to moderate italics. The subjects rated the pages that contained only one colour higher than the web pages with four or six colours. This research will help web developers better understand the effect of font size, italics, and colour count on web usability in general, and for young adults, in particular.

18.
Int Conf Signal Process Commun ; : 1-6, 2009 Sep 28.
Article in English | MEDLINE | ID: mdl-21743845

ABSTRACT

A new approach to recognizing vowels from articulatory position time-series data was proposed and tested in this paper. This approach directly mapped articulatory position time-series data to vowels without extracting articulatory features such as mouth opening. The input time-series data were time-normalized and sampled to fixed-width vectors of articulatory positions. Three commonly used classifiers, neural network, support vector machine, and decision tree, were used, and their performances were compared on the vectors. A single-speaker dataset of eight major English vowels acquired using an Electromagnetic Articulograph (EMA) AG500 was used. Recognition rates using cross validation ranged from 76.07% to 91.32% for the three classifiers. In addition, the trained decision trees were consistent with articulatory features commonly used to descriptively distinguish vowels in classical phonetics. The findings are intended to improve the accuracy and response time of a real-time articulatory-to-acoustics synthesizer.
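The time-normalization step described above (resampling each articulatory trajectory to a fixed-width vector) can be sketched with linear interpolation; the sample count is an illustrative choice:

```python
def resample(series, n):
    """Linearly interpolate a time series to a fixed-width vector of n
    samples, a common time-normalization step before classification."""
    out = []
    for k in range(n):
        t = k * (len(series) - 1) / (n - 1)  # position in the original series
        i = int(t)
        frac = t - i
        nxt = series[min(i + 1, len(series) - 1)]
        out.append(series[i] * (1 - frac) + nxt * frac)
    return out

print(resample([0.0, 2.0, 4.0, 6.0], 7))  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```

Applying this per sensor and per axis yields equal-length vectors regardless of how long each vowel production took, which is what lets a fixed-input classifier compare them.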
